Abstract:
Databases today are carefully engineered: there is an expensive and deliberate design process, after which a database schema is defined; during this design process, various possible instance examples and use cases are hypothesized and carefully analyzed; finally, the schema is ready and can then be populated with data. All of this effort is a major barrier to database adoption. In this paper, we explore the possibility of organic database creation instead of the traditional engineered approach. The idea is to let the user start storing data in a database with a schema that is just enough to cover the instances at hand. We then support efficient schema evolution as new data instances arrive. By designing the database to evolve, we can sidestep the expensive front-end cost of carefully engineering the design of the database. The same set of issues also applies to database querying. Today, databases expect queries to be carefully specified and to be valid with respect to the database schema. In contrast, the organic query specification model would allow users to construct queries incrementally, with little knowledge of the database. We also examine this problem in this paper.
Abstract:
This paper presents an efficient context-sensitive, field-based Andersen-style points-to analysis algorithm for Java programs. The algorithm first summarizes the methods of the program under analysis using directed graphs. It then performs local cycle elimination on these summary graphs to reduce their sizes. The main analysis algorithm uses these graphs to construct the main points-to graph. Topological sorting and cycle elimination are performed on the nodes of both the main points-to graph and the summary graphs to speed up the transitive closure computation on the main points-to graph. A suite of Java benchmark programs is used to demonstrate the efficiency of our algorithm.
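The pipeline this abstract describes, collapsing cycles so every node in a strongly connected component shares one points-to set, then propagating sets in topological order, can be sketched as follows. This is a generic illustration of the technique, not the paper's implementation: the `andersen_solve` helper, the node names, and the toy constraint graph are all made up.

```python
from collections import defaultdict

def andersen_solve(edges, base):
    """Propagate points-to sets over subset constraints.

    edges: dict node -> set of successors; an edge x -> y encodes pts(x) ⊆ pts(y).
    base:  dict node -> set of abstract objects the node points to directly.
    """
    nodes = set(edges) | {w for ws in edges.values() for w in ws} | set(base)

    # Tarjan's SCC: nodes on a cycle end up in one component and can share
    # a single points-to set (the "cycle elimination" step).
    index, low, on_stack, stack = {}, {}, set(), []
    scc_of, counter = {}, [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in edges.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                 # v is the root of its SCC
            while True:
                w = stack.pop(); on_stack.discard(w)
                scc_of[w] = v
                if w == v:
                    break

    for v in nodes:
        if v not in index:
            strongconnect(v)

    # Condense to a DAG over SCC representatives.
    pts = defaultdict(set)
    for v in nodes:
        pts[scc_of[v]] |= base.get(v, set())
    cedges = defaultdict(set)
    for v, ws in edges.items():
        for w in ws:
            if scc_of[v] != scc_of[w]:
                cedges[scc_of[v]].add(scc_of[w])

    # Topological sort of the DAG, then one source-to-sink sweep computes
    # the transitive closure of the subset constraints.
    order, seen = [], set()
    def visit(v):
        seen.add(v)
        for w in cedges.get(v, ()):
            if w not in seen:
                visit(w)
        order.append(v)                        # post-order
    for v in list(pts):
        if v not in seen:
            visit(v)
    for v in reversed(order):                  # sources before sinks
        for w in cedges.get(v, ()):
            pts[w] |= pts[v]

    return {v: pts[scc_of[v]] for v in nodes}

# Toy constraint graph: a -> b -> c -> b (b and c form a cycle).
res = andersen_solve({'a': {'b'}, 'b': {'c'}, 'c': {'b'}},
                     {'a': {'o1'}, 'b': {'o2'}})
# res['b'] == res['c'] == {'o1', 'o2'}; res['a'] == {'o1'}
```

Collapsing the b/c cycle first means the propagation pass touches each SCC once, which is the source of the speedup the abstract claims for the transitive closure computation.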
Abstract:
Manufacturers design and develop numerous product variants to address different customer preferences in a competitive market. A product can be characterized by a vector of attributes such as sale price, reliability, and functionality. The challenge is how to make decisions on product or product-family planning and design, supply chain, and marketing in a concurrent and integrated manner. Research on the integration and coordination of product design, supply chain configuration, and marketing decisions has received much attention recently and needs further investigation. This paper provides a comprehensive review of recent research incorporating marketing, management, and engineering considerations in product planning and design.
Abstract:
This paper studies the integration of the design curriculum and the manufacturing curriculum via virtual prototyping. Design and manufacturing are two important subject areas in most engineering schools, and various courses are offered in both. However, under the current curriculum setting, the design program and the manufacturing program have been developed separately, without regard to the potential benefits of integrating them, due to the lack of a curricular bridge to properly link them together. Virtual prototyping, also called dynamic motion simulation, is one possible solution to this problem. Virtual prototyping is usually delivered in a computational multibody dynamics (CMD) course. The CMD course is designed to build a student's basic motion- and force-analysis skills in order to inform his or her design and make the design ready for manufacturing. Introducing a standalone computational multibody dynamics course is one alternative for tying design and manufacturing together via virtual prototyping. Virtual prototyping can also be included as a section of a computer-aided design/computer-aided manufacturing (CAD/CAM) course to link design and manufacturing. A new course entitled Applied Multibody Dynamics was initiated in the mechanical engineering program at South Dakota State University. This new course has addressed the need for engineering design linked to manufacturing. To make the course substantially fulfill its role as a bridge between the design curriculum and the manufacturing curriculum, the course outcomes have been tied to the students' senior design projects. Student surveys and course assessments indicate that the course plan and design provide a promising solution to the need for integration between the design curriculum and the manufacturing curriculum.
Abstract:
Recent years have witnessed increasing interest in prompt-based learning, in which models can be trained on only a few annotated instances, making them suitable for low-resource settings. When using prompt-based learning for text classification, the goal is to use a pre-trained language model (PLM) to predict a missing token in a pre-defined template given an input text, which can then be mapped to a class label. However, PLMs built on the transformer architecture tend to generate similar output embeddings, making it difficult to discriminate between different class labels. The problem is further exacerbated when dealing with classification tasks involving many fine-grained class labels. In this work, we alleviate this information diffusion issue, i.e., that different tokens share a large proportion of similar information after going through the stacked self-attention layers of a transformer, by proposing a calibration method built on feature transformations through rotation and scaling, which maps a PLM-encoded embedding into a new metric space that guarantees the distinguishability of the resulting embeddings. Furthermore, we take advantage of hyperbolic embeddings to capture the hierarchical relations among fine-grained class-associated token embeddings through a coarse-to-fine metric learning strategy, enhancing the distinguishability of the learned output embeddings. Extensive experiments on three datasets under various settings demonstrate the effectiveness of our approach.
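A toy 2-D example can illustrate why rotation plus anisotropic scaling helps separate near-identical embeddings. This is not the paper's learned calibration: the points, the angle, and the scale factors below are invented for illustration. The key observation is that a rotation alone preserves cosine similarity; it only aligns the difference between two embeddings with an axis, which the per-axis scaling then amplifies.

```python
import math

def calibrate(p, theta, sx, sy):
    """Rotate the 2-D point p by theta, then scale each axis independently.
    The rotation preserves angles; the anisotropic scaling stretches apart
    directions the rotation has aligned with an axis."""
    x, y = p
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    return (xr * sx, yr * sy)

def cos_sim(u, v):
    dot = u[0] * v[0] + u[1] * v[1]
    return dot / (math.hypot(*u) * math.hypot(*v))

# Two nearly identical "output embeddings" for different class labels.
a, b = (1.05, 0.95), (0.95, 1.05)
# Their difference lies along the -45° direction; rotating by +45° maps it
# onto the x-axis, which the scaling then amplifies.
ca = calibrate(a, math.pi / 4, sx=5.0, sy=0.2)
cb = calibrate(b, math.pi / 4, sx=5.0, sy=0.2)
# cos_sim(a, b) ≈ 0.995, while cos_sim(ca, cb) ≈ -0.22: the calibrated
# embeddings are far easier to discriminate.
```

In the paper's setting the transformation acts on high-dimensional PLM embeddings and is fitted rather than hand-picked, but the geometric effect sketched here is the same.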
Abstract:
The absolute thermal laser energy meter based on a conical cavity has a wide wavelength adaptation range and a high laser damage threshold, so it serves as the standard for high-energy laser energy meters and is widely used in high-energy laser measurement. However, laser energy is lost through heat exchange and through back-scattering from the conical absorption cavity, so accurate measurement of the laser energy is possible only after this loss is compensated and corrected. Addressing the energy-loss compensation problem of the conical-cavity high-energy laser energy meter, this paper first analyzes, according to heat-transfer theory, the heat loss of the conical cavity due to heat radiation, heat convection, and heat exchange, and constructs a mathematical model of the heat loss. The measured results are curve-fitted using the least-squares technique and then compensated and corrected with the fitted curve; the resulting measurement repeatability is 0.7%, a substantial improvement. Second, based on the optical reciprocity between the inner surface of the conical cavity and the incident laser, and using the composite Simpson numerical method, mathematical models of the optical power density distribution at the conical cavity aperture and of the total back-scattered power are established. After the measured results are compensated and corrected with these models, the back-scattering energy loss is found to be about 0.5% to 2.5%, and the accuracy of high-energy laser energy measurement is effectively improved.
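The curve-fit-and-compensate step can be illustrated with ordinary least squares. This is a simplified sketch: the paper derives its loss model from heat-transfer theory, whereas here a linear cooling term, the `fit_linear` helper, and all readings are assumed purely for illustration.

```python
def fit_linear(ts, es):
    """Ordinary least-squares fit of E(t) = e0 - k*t to meter readings.
    Extrapolating the fitted line back to t = 0 estimates the energy
    before heat loss, i.e. the compensated measurement."""
    n = len(ts)
    st, se = sum(ts), sum(es)
    stt = sum(t * t for t in ts)
    ste = sum(t * e for t, e in zip(ts, es))
    slope = (n * ste - st * se) / (n * stt - st * st)
    intercept = (se - slope * st) / n
    return intercept, -slope          # (e0, loss rate k)

# Hypothetical readings drifting downward as the cavity sheds heat.
ts = [10.0, 20.0, 30.0, 40.0]        # seconds after the laser shot
es = [99.0, 98.0, 97.0, 96.0]        # joules indicated by the meter
e0, k = fit_linear(ts, es)
# e0 ≈ 100 J is the loss-compensated energy estimate; k ≈ 0.1 J/s.
```

In the paper the fitted loss model is the physically derived one, and the compensation is what drives the reported 0.7% measurement repeatability.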
Abstract:
We present three new interferometric techniques for dispersion characterization, covering devices from millimeter-scale waveguides to kilometers of fiber. The first is a Frequency-Shifted Interferometer (FSI) that measures fibers from meters to tens of kilometers. The second is a three-wave Single-Arm Interferometer (SAI), in which the envelope of a three-wave interference pattern yields the second-order dispersion directly; it is suitable for fibers from centimeters to >1 m. The third is a Common-Path Interferometer (CPI) that measures the dispersion of millimeter-long fibers and waveguides. These techniques offer high precision in their respective ranges, and all are "single-arm" interferometers: the two interfering beams go through the same arm of the interferometer. They are simple, low-cost, and more resilient to phase and polarization instabilities than conventional interferometric techniques for dispersion measurement.